
    Elucidation of the preventive effect of vitamin D against olanzapine-induced dyslipidemia based on clinical big data

    Kyoto University, new-system course doctorate, Doctor of Pharmaceutical Sciences, Kou No. 24551 (Yakukahaku No. 168), shelf mark 新制||薬科||18 (University Library). Kyoto University Graduate School of Pharmaceutical Sciences, Department of Pharmaceutical Sciences. Examination committee: Prof. Shuji Kaneko (chair), Prof. Hiroshi Takeshima, Prof. Motonari Uesugi. Conferred under Article 4, Paragraph 1 of the Degree Regulations. Doctor of Pharmaceutical Sciences, Kyoto University. DFA

    Stochastic Nonsmooth Convex Optimization with Heavy-Tailed Noises

    Recently, several studies have considered the stochastic optimization problem in a heavy-tailed noise regime, i.e., the difference between the stochastic gradient and the true gradient is assumed to have a finite $p$-th moment (say, upper bounded by $\sigma^{p}$ for some $\sigma \geq 0$), where $p \in (1,2]$. This not only generalizes the traditional finite-variance assumption ($p=2$) but has also been observed in practice for several different tasks. Under this challenging assumption, much new progress has been made for both convex and nonconvex problems; however, most of it considers only smooth objectives. In contrast, the problem has not been fully explored or well understood when the objective is nonsmooth. This paper aims to fill this crucial gap by providing a comprehensive analysis of stochastic nonsmooth convex optimization with heavy-tailed noises. We revisit a simple clipping-based algorithm, which so far has only been proved to converge in expectation, and only under an additional strong convexity assumption. Under appropriate choices of parameters, for both convex and strongly convex functions, we not only establish the first high-probability rates but also give refined in-expectation bounds compared with existing works. Remarkably, all of our results are optimal (or nearly optimal up to logarithmic factors) with respect to the time horizon $T$, even when $T$ is unknown in advance. Additionally, we show how to make the algorithm parameter-free with respect to $\sigma$; in other words, the algorithm can still guarantee convergence without any prior knowledge of $\sigma$.
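    Since the abstract only names "a simple clipping-based algorithm" without giving its details, the following Python sketch illustrates the general idea of clipped stochastic subgradient descent under heavy-tailed noise. The function name clipped_subgradient_method, the oracle interface, and the constants eta and tau are illustrative assumptions, not the paper's actual parameter schedules.

```python
import numpy as np

def clipped_subgradient_method(grad_oracle, x0, T, eta=0.1, tau=1.0):
    # Hypothetical sketch: clipped stochastic subgradient descent with a
    # decaying step size and averaged iterates. eta (step size) and tau
    # (clipping threshold) are placeholders, not the paper's choices.
    x = np.asarray(x0, dtype=float)
    avg = np.zeros_like(x)
    for t in range(1, T + 1):
        g = np.asarray(grad_oracle(x), dtype=float)  # stochastic subgradient, possibly heavy-tailed
        norm = np.linalg.norm(g)
        if norm > tau:
            g = g * (tau / norm)                     # clipping bounds the effective noise magnitude
        x = x - (eta / np.sqrt(t)) * g               # subgradient step with 1/sqrt(t) decay
        avg += (x - avg) / t                         # running average of iterates as the output point
    return avg
```

    Clipping is what makes the heavy-tailed setting tractable: once the update direction is bounded by tau, large-deviation arguments can be applied even when the raw noise has no finite variance.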

    Near-Optimal Non-Convex Stochastic Optimization under Generalized Smoothness

    The generalized smoothness condition, $(L_{0},L_{1})$-smoothness, has attracted interest since it is more realistic for many optimization problems, as shown by both empirical and theoretical evidence. Two recent works established an $O(\epsilon^{-3})$ sample complexity for obtaining an $O(\epsilon)$-stationary point. However, both require a large batch size on the order of $\mathrm{poly}(\epsilon^{-1})$, which is not only computationally burdensome but also unsuitable for streaming applications. Additionally, these existing convergence bounds hold only in expectation, which is inadequate because they do not supply a useful performance guarantee on a single run. In this work, we solve both problems simultaneously by revisiting a simple variant of the STORM algorithm. Specifically, under $(L_{0},L_{1})$-smoothness and affine-type noises, we establish the first near-optimal $O(\log(1/(\delta\epsilon))\,\epsilon^{-3})$ high-probability sample complexity, where $\delta \in (0,1)$ is the failure probability. Moreover, for the same algorithm, we also recover the optimal $O(\epsilon^{-3})$ sample complexity for expected convergence, with improved dependence on the problem-dependent parameters. More importantly, our convergence results require only a constant batch size, in contrast to previous works.
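    For reference, the core STORM recursion that the abstract's variant builds on can be sketched as below, using a single fresh sample per step. The oracle interface grad_oracle(x, seed) and the constants eta and a are illustrative assumptions; the paper's actual step-size and momentum schedules under $(L_{0},L_{1})$-smoothness are not reproduced here.

```python
import numpy as np

def storm_variant(grad_oracle, x0, T, eta=0.01, a=0.1):
    # Hypothetical sketch of the STORM recursion with constant batch size 1.
    # grad_oracle(x, seed) should return a stochastic gradient at x computed on
    # the sample indexed by seed; reusing the same seed at x and x_prev gives
    # the correlated gradient pair the recursion needs. eta and a are
    # illustrative constants, not the paper's schedules.
    x_prev = np.asarray(x0, dtype=float)
    d = np.asarray(grad_oracle(x_prev, 0), dtype=float)          # initial estimator: plain stochastic gradient
    x = x_prev - eta * d
    for t in range(1, T):
        g_new = np.asarray(grad_oracle(x, t), dtype=float)       # gradient at the new point, sample t
        g_old = np.asarray(grad_oracle(x_prev, t), dtype=float)  # gradient at the old point, same sample t
        d = g_new + (1.0 - a) * (d - g_old)                      # recursive momentum (variance reduction)
        x_prev, x = x, x - eta * d                               # one fresh sample per iteration
    return x
```

    The correction term (1 - a) * (d - g_old) is what removes the need for large batches: it reuses the previous estimator while cancelling stale bias, so the variance of d shrinks over time even though each step draws only one sample.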